Methodology of the MOOC User Study

A user study was designed to assess these explainability methods along several dimensions. Loosely modeled on the anchors paper (Ribeiro et al., 2018), its first component exposes a respondent to a data point together with a single explanation method. To measure that method's explanatory power, we then test how accurately the respondent can predict the model's outputs on subsequent data points: if the explanation genuinely helped the respondent understand how the model makes its predictions, it should improve their ability to anticipate those predictions. After being exposed to every method across a variety of samples, respondents provide feedback on each.
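To make the forward-prediction measure concrete, the sketch below shows one way a per-method accuracy score could be computed from respondents' guesses. This is a minimal illustration under assumed data structures, not the study's actual analysis code; the `Response` record and `simulatability_accuracy` function are hypothetical names.

```python
# Hypothetical sketch of the forward-prediction (simulatability) metric:
# the fraction of held-out points where a respondent correctly guessed
# the model's output after seeing a given explanation method.

from dataclasses import dataclass

@dataclass
class Response:
    method: str            # explanation method shown to the respondent
    model_prediction: str  # the model's actual output on the held-out point
    user_guess: str        # the respondent's guess at the model's output

def simulatability_accuracy(responses: list[Response], method: str) -> float:
    """Accuracy of respondents' guesses for one explanation method."""
    relevant = [r for r in responses if r.method == method]
    if not relevant:
        return float("nan")
    hits = sum(r.user_guess == r.model_prediction for r in relevant)
    return hits / len(relevant)

# Example: score each method across all collected responses.
responses = [
    Response("LIME", model_prediction="pass", user_guess="pass"),
    Response("LIME", model_prediction="fail", user_guess="pass"),
    Response("anchors", model_prediction="fail", user_guess="fail"),
]
for m in ("LIME", "anchors"):
    print(m, simulatability_accuracy(responses, m))
```

A method whose explanations transfer well to new data points should push this score above the respondent's pre-exposure baseline.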

The respondents were 13 undergraduates or recent graduates with varying levels of familiarity with computer science and machine learning. They also brought a range of attitudes toward ML into the survey, from AI skeptics with humanities backgrounds to CS graduates developing AI systems in industry. Our intention was to broadly assess user reactions to the different explanation methods, both in how much they affected user trust and in how easy they were to understand. At the end of the survey, respondents were asked to assess the methods comparatively.

Click here for a discussion of the MOOC results, and here to see how the ResNet survey methodology differed from the MOOC survey's.